Tutorial - Rolls

B&B - Vision II

Inferotemporal cortex and visual agnosias

 

Weiskrantz & Saunders

can learn object discriminations without your IT

but monkeys are slower

couldn't do the transforms + invariance

supported by neurophysiology

 

 

Holmes and Gross

 

prefer vertical pen to horizontal pen - rely only on whether it's vertical?

 

we don't know to what extent higher primates 'see' things as being anything

 

 

 

new reference

paper in Neuron - by Rolls - to be published in the next year

re neurophysiology of invariance

no translation invariance in V1, but you do get it in IT - done for faces, objects

Booth & Rolls (1998), incl. view-invariance

10 objects in the monkey's cage

some Ns fire for the same object in different views

no longer think of things as being 3D object-centred, because theory suggests 2D views

illumination-invariant - few experiments done so far

colour - respond similarly in grey scale

small variations in shape …

Tanaka - perspective invariance: not for real objects but altered perspectives of outline shapes, showing that the IT neurons still fired whatever perspective was used

occlusion + completeness - often respond as if it's the same object

texture - spatial frequency

Fourier analysis of a face to eliminate low frequencies - would look like a pencil drawing, so you'd lose all the bits where there's no change, e.g. surfaces

opposite - blurred - cells also respond; show invariance even if there's no overlap in frequency content = spatial frequency invariance
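The high-pass ("pencil drawing") and low-pass (blurred) stimuli above can be thought of as two non-overlapping spatial-frequency bands of the same image. A minimal numpy sketch, with an arbitrary random image and an arbitrary cutoff (both my choices, not from the notes):

```python
import numpy as np

# Split an image into non-overlapping spatial-frequency bands: a low-pass
# (blurred) version and a high-pass (edge/line-drawing-like) version that
# share no Fourier components, yet each can support recognition.
rng = np.random.default_rng(0)
img = rng.random((64, 64))            # stand-in for a face image

spectrum = np.fft.fftshift(np.fft.fft2(img))
fy, fx = np.indices(img.shape)
radius = np.hypot(fy - 32, fx - 32)   # distance from the DC component
cutoff = 8                            # cycles/image, chosen for illustration

low = np.fft.ifft2(np.fft.ifftshift(spectrum * (radius <= cutoff))).real
high = np.fft.ifft2(np.fft.ifftshift(spectrum * (radius > cutoff))).real

# The two bands are disjoint in frequency but sum back to the original:
assert np.allclose(low + high, img)
```

Cells responding to both `low` and `high` versions of a face cannot be keying on any single frequency band, which is the point of the spatial-frequency-invariance claim.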

size - mean size tuning (12.5 times size, and still carry information about the faces according to the half-amplitude statistical measure)

= solving a massive computational problem in 4 stages

in <16fps, = approx 90ms to get to the IT, = 50ms of cortical processing time

= about 15ms per stage

neural network - integrate and fire - 17 ms
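The timing argument above is simple arithmetic; a sketch of the back-of-envelope calculation (the 50 ms and 4-stage figures are taken from the notes):

```python
# ~90 ms stimulus-to-IT latency, of which ~50 ms is cortical processing
# spread over 4 stages (V1 -> V2 -> V4 -> IT).
cortical_ms = 50
stages = 4
per_stage = cortical_ms / stages      # ~12.5 ms, close to the ~15 ms quoted
```

With only ~15 ms per stage there is time for roughly one synaptic volley per area, which is why a feedforward integrate-and-fire network with ~17 ms per layer is a plausible model.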

VisNet demonstrates the computability in a 4-layer network - doesn't store enough variations of every size/shape

Wallis & Rolls on VisNet (1997?), new paper in Neural Computation
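VisNet-style networks learn invariance with a trace learning rule (from the Wallis & Rolls literature, not spelled out in these notes): the Hebbian postsynaptic term is a running average over recent inputs, so successive transformed views of one object, which occur close together in time, get bound onto the same output neurons. A minimal sketch with made-up sizes and parameter values:

```python
import numpy as np

# Trace learning rule sketch: w += eta * trace * input,
# where trace is a temporal average of the postsynaptic activation.
rng = np.random.default_rng(1)
n_in, alpha, eta = 100, 0.8, 0.1      # arbitrary illustrative parameters

views_of_one_object = [rng.random(n_in) for _ in range(5)]  # toy views

w = rng.random(n_in) * 0.01
w0 = w.copy()
trace = 0.0
for view in views_of_one_object:      # successive transformed views in time
    y = w @ view                      # postsynaptic activation
    trace = alpha * trace + (1 - alpha) * y
    w += eta * trace * view           # Hebbian update with the trace term
```

Because the trace carries activation over from the previous view, the weight vector comes to respond to all the views, not just the last one presented.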

 

we know how the information is represented in IT

can read off the activity of a handful of cells, relate back to stimulus

by knowing what the firing rate is, can guess from the average firing rate which of 8 possible stimuli it was

= obviously different for every animal

every new neuron linearly adds to the information, so every neuron carries independent information - can be decoded by doing the neuronal dot product (Rolls et al. 1997, NN&BF)

 

doesn't matter which cells you take - works for a random selection
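The dot-product decoding described above can be sketched directly: compare a population response vector against the mean response vector for each stimulus and pick the best match. The toy data below (cell counts, 8 stimuli, noise level) are my assumptions, not real recordings:

```python
import numpy as np

# Dot-product decoding: guess which of 8 stimuli was shown from a vector of
# firing rates, by similarity to each stimulus's average firing profile.
rng = np.random.default_rng(2)
n_cells, n_stimuli, n_trials = 30, 8, 20

means = rng.random((n_stimuli, n_cells)) * 10          # mean rate profiles
trials = means[:, None, :] + rng.normal(0, 1.0, (n_stimuli, n_trials, n_cells))

def decode(response, means):
    # Normalised dot product (cosine) of the response with each mean profile.
    sims = (means / np.linalg.norm(means, axis=1, keepdims=True)) @ (
        response / np.linalg.norm(response))
    return int(np.argmax(sims))

correct = sum(decode(trials[s, t], means) == s
              for s in range(n_stimuli) for t in range(n_trials))
accuracy = correct / (n_stimuli * n_trials)            # well above chance (1/8)
```

Because each neuron adds roughly independent information, accuracy grows with the number of cells in the vector, and it does not matter which cells are sampled.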

 

done with faces + objects

10 objects from 4 views in the monkey's cage

 

dot product of firing rate of cells with the average firing to every stimulus ever tested, can tell which stimulus it was

don't take into account the temporal aspects

which are crucial at this level for reading it

 

I can't do that for a new brain - is this a pseudo-problem?

IT projects to orbitofrontal, amygdala, BG, so they understand the readout

well, how does the orbitofrontal cortex read it out - a pattern associator learns it and can read the code, and can do a visual/taste co-ordination
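A pattern associator of the kind invoked here can be sketched in a few lines: a taste input drives the output during learning, a Hebbian outer-product association is stored, and afterwards the visual (IT-like) pattern alone recalls the taste response. All patterns and sizes below are made up for illustration:

```python
import numpy as np

# Pattern associator as a downstream "reader" of the IT code:
# associate a sparse visual pattern with a taste output pattern.
rng = np.random.default_rng(3)
n_visual, n_taste = 50, 10

visual = (rng.random(n_visual) > 0.7).astype(float)   # sparse IT-like input
visual[0] = 1.0                    # ensure at least one active input unit
taste = (rng.random(n_taste) > 0.7).astype(float)     # target taste pattern

W = np.outer(taste, visual)        # one-shot Hebbian (outer-product) learning
recalled = (W @ visual > 0).astype(float)             # visual input alone
```

After learning, `recalled` reproduces `taste` exactly, which is the sense in which the orbitofrontal cortex can "read" the IT code without any homunculus: a simple associative synaptic matrix suffices.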

what else do you need? have we understood how the code can be used? need we posit a homunculus?

 

how does that work for complex composite objects?

what if you have a visual scene? do different objects interfere with each other?

single object ID in parallel, serial ID of objects in scene

if 2 objects: weight towards the fovea

translation invariance in a real scene - if it is really translation-invariant in the IT, how do you know which object is the target for your action?

background invariance

does the receptive field get smaller?

 

why is this code invariant?

why not have one size/position/etc. coded by one set of neurons?

would need to bring them together to compare objects at different sizes - a pattern associator wouldn't recognise the same object in different positions, and that wouldn't be adaptive

need to compute the object before sending it off to an area which decides what to do with it

processing/storage requirements

 

we know a bit from lesion and single-cell studies

 

Agnosias

apperceptive - can't copy or match

do you have enough information coming into your spatial discriminator?

is it a reception or a form analysis problem?

 

associative

discrimination of objects = good, can copy + match

can�t access the semantic/naming level

if semantics without naming = anomia (need to show that they do first have the concept to distinguish it from a general semantic loss)

do associative agnosics have an IT?

is it the output from that that is impaired?

compare it with functioning IT-orbitofrontal transmission

Weiskrantz study - can they do that?

whether they could match/copy different sizes/views of objects - say whether or not they're the same?

if they can, it's a disconnection at a higher level

this hasn�t been done

 

 

 

IT in humans = fusiform face area

 

levels, low to high: normal visual acuity, reception, form analysis

 

simultanagnosia - usually higher-level damage to parietal, re reduced visual STM (but not an auditory STM problem)

 

 

has there been a Brain & Emotion review?

 

next week - Dorsal (parietal)

what are the functions of the parietal cortex?

 

Kandel - Principles of Neural Science (esp. Richard Andersen)

get Rolls to email us

 

rewrite this essay